Multiple GPUs using DataParallel #484
Conversation
Merge commit (conflicts resolved in mala/network/trainer.py)
Thanks for the PR! I really like this, and I think we should implement it before we move to DDP (which is the next thing on my to-do list), so I just had a look at this PR.
I have made these changes in a PR here: nerkulec#1. If that looks OK, we could first merge that PR and then this one.
Oh wait, there is one potential problem: training with multiple GPUs and then loading the model to run with either only one GPU or MPI+GPU. I will test this right away!
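For background on why this can break: checkpoints saved from an `nn.DataParallel` wrapper carry a `module.` prefix on every state-dict key, which a plain single-GPU model will not accept. A minimal sketch of the issue and a common workaround, using generic PyTorch rather than MALA's actual checkpointing code (the model and file name here are placeholders):

```python
import torch
import torch.nn as nn

# Toy stand-in for a trained network; MALA's architectures differ.
model = nn.Sequential(nn.Linear(8, 8))

# State dicts saved from the DataParallel wrapper prefix every key
# with "module.", because the wrapped network is registered under
# that attribute:
wrapped = nn.DataParallel(model)
torch.save(wrapped.state_dict(), "multi_gpu_checkpoint.pt")

# Loading such a checkpoint into an unwrapped (single-GPU or CPU)
# model fails unless the prefix is stripped first:
state_dict = torch.load("multi_gpu_checkpoint.pt", map_location="cpu")
state_dict = {k.removeprefix("module."): v  # str.removeprefix: Python 3.9+
              for k, v in state_dict.items()}
model.load_state_dict(state_dict)

# Alternatively, save wrapped.module.state_dict() in the first place,
# so the checkpoint is independent of how the model was parallelized.
```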
Adjustments to multiple GPU implementation
I like your changes :)
Merge commit (conflicts resolved in mala/descriptors/descriptor.py)
I confirmed that the inference pipeline indeed still works!
In theory this works; once it is benchmarked, we can merge it!
The benchmarks showed that multi-GPU training with DataParallel was slower than training on a single GPU. I'm closing this, since we now have the DDP implementation.
This allows multiple GPUs to be utilized for training MALA models.
It is done with DataParallel, which, in contrast to DistributedDataParallel, does not require multiprocessing.
Simply use
No additional changes to Python or Slurm scripts are needed.
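For context, this is roughly what the DataParallel approach looks like in plain PyTorch; a minimal sketch assuming a generic placeholder model, not MALA's actual trainer code or parameter names:

```python
import torch
import torch.nn as nn

# Placeholder network; MALA's feed-forward models are configured
# through its Parameters object, which is not reproduced here.
model = nn.Sequential(nn.Linear(10, 100), nn.ReLU(), nn.Linear(100, 10))

if torch.cuda.device_count() > 1:
    # DataParallel replicates the model and splits each batch across
    # all visible GPUs inside a single process, so no multiprocessing
    # launcher (torchrun, multiple Slurm tasks, ...) is needed.
    model = nn.DataParallel(model)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

# The training loop is unchanged from the single-GPU case: the forward
# pass scatters the batch to the GPUs and gathers outputs on GPU 0.
batch = torch.randn(64, 10, device=device)
outputs = model(batch)
```

The single-process design is what leaves the Slurm scripts unchanged (one task simply sees all GPUs on the node), but routing every scatter and gather through that one process is also why DataParallel often benchmarks slower than DistributedDataParallel, consistent with the result reported above.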